
Kubernetes Architecture and Configuration - Part 2

In our previous article, we explored the most common Kubernetes components:

  • We started with Pods and the Services that allow them to communicate with each other.
  • Then, we examined the Ingress component used to route traffic into the cluster.
  • We also skimmed through an external configuration using ConfigMaps and Secrets.
  • Afterward, we analyzed Data persistence with the help of Volumes.
  • Finally, we took a quick look at pod blueprints and replication mechanisms such as Deployments and StatefulSets (the latter is used specifically for stateful applications such as databases).

In this article, we will explore Kubernetes architecture and configuration.

Kubernetes Architecture

The Kubernetes cluster consists of at least one master node connected to one or more worker nodes, with each node running a kubelet process. The kubelet is the Kubernetes agent that runs on every node; it lets the nodes communicate with the control plane and makes sure that containers are running in Pods. Kubernetes is built around a cluster-centric model, designed to manage containerized applications across a set of nodes.

At its core, Kubernetes comprises the key components described below:

1-Master Node

The master node runs the following Kubernetes processes:

  • API Server: This component serves as the entry point for all administrative tasks and client interactions with the Kubernetes cluster. It validates and configures data through various API objects, including Pods, Services, Deployments, and ConfigMaps. Below, you can find the different Kubernetes clients used to talk to the API Server (a short kubectl/curl sketch follows this list):
    • UI (User Interface):
      • Kubernetes provides a web-based graphical user interface (UI), also known as the Kubernetes Dashboard. The Dashboard lets users view and manage various aspects of the cluster, such as Pods, Deployments, Services, and more, through a visual interface. Users can perform such actions as creating, deleting, and scaling resources, as well as monitoring cluster health and resource utilization. The Dashboard is a convenient option for users who prefer a graphical interface for managing their Kubernetes clusters.
    • API (Application Programming Interface):
      • The Kubernetes API is the primary interface for interacting with the cluster programmatically. It exposes a RESTful API that lets users perform operations on Kubernetes resources via HTTP requests. Users can employ tools and libraries to interact with the API directly, allowing for automation, integration with other systems, and custom development of Kubernetes management solutions.
    • CLI (Command-Line Interface):
      • The Kubernetes CLI, commonly referred to as kubectl, is a command-line tool that provides a convenient way to interact with the Kubernetes API from the command line. Users can utilize kubectl to perform various operations on Kubernetes resources, including creating, deleting, and updating Pods, Deployments, Services, ConfigMaps, and more. Kubectl provides a rich set of commands and options for managing Kubernetes clusters, making it a versatile tool for both administrators and developers.
  • Scheduler: The scheduler is responsible for assigning Pods to nodes. It looks for newly created Pods with no assigned node and selects an appropriate node for them based on resource requirements, policies, and other constraints.
  • Controller Manager: The controller manager runs controller processes to regulate the state of the cluster. For instance, the ReplicaSet controller ensures that the desired number of Pod replicas is always running, while the Deployment controller manages updates and rollouts of application deployments.
  • etcd: etcd is a key-value store that holds the current state of the Kubernetes cluster, including all configuration and status data. It ensures consistency and fault tolerance for critical information such as cluster configuration, state, and metadata.
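As a minimal illustration of the CLI and API clients above (the namespace and proxy port here are just example values), the same pod listing can be retrieved either through kubectl or over the REST API via a local proxy:

# CLI: list pods in the default namespace
kubectl get pods --namespace default

# API: open a local proxy to the API server, then query it with curl
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/pods

Both requests end up in the same place: the API server validates them and answers from the cluster state.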

2-Worker Node

Each worker node has containers of different applications deployed on it, so depending on how the workload is distributed, you will have a different number of Docker containers running on each worker node. Worker nodes are where the actual work happens: they are the place where your applications run.

3-Virtual Networking

  • Pod Networking: Kubernetes assigns each Pod a unique IP address, allowing Pods to communicate with each other over the network. Containers within a Pod share the Pod's network namespace, including its IP address and network ports.
  • Service Networking: Kubernetes Services provide a stable endpoint for accessing a set of Pods. Each Service is assigned a virtual IP address and DNS name, enabling clients to access the Service regardless of the underlying Pods' IP addresses or locations.
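For illustration, cluster DNS gives every Service a predictable name; the Service name and namespace below are hypothetical examples:

# A Service "my-service" in the "default" namespace exposing port 52773
# is reachable from other pods at a stable address, regardless of which
# pod IPs currently back it:
#   http://my-service.default.svc.cluster.local:52773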

Kubernetes Configurations:

All configuration in a Kubernetes cluster goes through the master node, via the process called the API server, which we mentioned briefly earlier. Kubernetes clients can be a UI (for example, the Kubernetes Dashboard), an API client (for instance, a script or a curl command), or a command-line tool like kubectl. They all talk to the API server, which is the only entry point into the cluster, and send their configuration requests to it. These requests must be in either YAML or JSON format.

First of all, we need to start the Kubernetes cluster, which can be done with the "minikube start" command.

Start a Kubernetes cluster (minikube)

Minikube is a local Kubernetes distribution focused on making it easy to learn and develop for Kubernetes. All we need is Docker (or a similarly compatible container runtime) or a virtual machine environment.

For more details, please visit the official minikube website.

Below is the command to start minikube:

minikube start
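If minikube does not pick the driver you expect, you can also pin it explicitly with the --driver flag; the docker driver below is only an assumption about your environment:

minikube start --driver=docker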

Use the below command to check minikube status:

minikube status
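As an optional extra check, you can also ask kubectl for the cluster endpoints:

kubectl cluster-info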


Congratulations! Minikube has started successfully!

Now, employ the command mentioned below to check running nodes:

kubectl get nodes


The minikube node (master node) is ready.

At this point, utilize the following command to check pods:

kubectl get pods


No pods are available yet.

It is time to start IRIS Pod with the help of the next command:

kubectl run iris --image=containers.intersystems.com/intersystems/iris-community:latest-cd --expose --port 52773

The above command does the following:

  • Runs a pod called iris
  • Pulls the container image containers.intersystems.com/intersystems/iris-community:latest-cd from the InterSystems container registry
  • Runs the command in the background
    • Note: This command does not include the -it flag, which runs a container or pod interactively.
  • Exposes the 52773 TCP port, which provides access to the InterSystems IRIS Management Portal

Please take into account that the command listed above created both an "iris" pod and an "iris" service (the --expose flag is what creates the service).
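Since port 52773 serves the Management Portal, one way to reach it from your local machine is to port-forward to the pod; this is just a sketch, and the choice of local port is arbitrary:

kubectl port-forward pod/iris 52773:52773

You can then open http://localhost:52773/csp/sys/UtilHome.csp in a browser to see the portal.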

Run the following command again to check running pods:

kubectl get pods


iris pod is running.

To retrieve information about pods in a Kubernetes cluster, including additional details such as the node name and IP address, use the following command:

kubectl get pods -o wide



To check the running services, use the command below:

kubectl get svc
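To see more detail about the service (its ClusterIP, port mapping, and the pod endpoints behind it), you can also run:

kubectl describe service iris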


iris service is running as well.

Connect to the InterSystems IRIS Terminal by using the below command:

kubectl exec -it iris -- iris session iris
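Once you are inside the IRIS terminal, you can leave the session and return to your shell by typing the halt command at the prompt:

halt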

Now, let's utilize configuration files to achieve the tasks mentioned above.

First, delete the iris pod and service using the following commands:

kubectl delete pod iris
kubectl delete service iris

Create the following deployment.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: iris-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iris
  template:
    metadata:
      labels:
        app: iris
    spec:
      containers:
      - name: iris
        image: containers.intersystems.com/intersystems/iris-community:latest-cd
        ports:
        - containerPort: 52773
  • apiVersion: Specifies the version of the Kubernetes API being used. In this case, it is apps/v1.
  • kind: Determines the type of Kubernetes resource. Here, it is a Deployment, which manages a set of replicated pods.
  • metadata: Contains metadata about the Deployment, for instance, its name.
  • spec: Describes the desired state of the Deployment.
    • replicas: Defines the number of desired replicas (instances) of the application. Here, it is set to 1.
    • selector: Clarifies how the Deployment selects which pods to manage. Pods that match the labels specified here will be managed by this Deployment.
    • template: Explains the pod template that will be employed to create new pods.
      • metadata: Contains labels for the pod.
        • spec: Contains the specification of the pod.
          • containers: Lists the containers within the pod.
          • name: Defines the name of the container.
          • image: Defines the Docker image to use for the container.
          • ports: Specifies the ports that the container exposes.

Run the below command to apply the deployment configuration:

kubectl apply -f deployment.yaml
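Because the replicas field drives the number of pod instances, you can later scale the Deployment without editing the YAML file; for example (the replica count here is arbitrary):

kubectl scale deployment iris-deployment --replicas=2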

Now, create the following service.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: iris-service
spec:
  selector:
    app: iris
  ports:
    - protocol: TCP
      port: 52773
      targetPort: 52773
  type: ClusterIP
  • apiVersion: Pinpoints the version of the Kubernetes API being used. In this case, it is v1.
  • kind: Mentions the type of Kubernetes resource. Here, it is a Service, which provides network access to a set of pods.
  • metadata: Contains metadata about the Service, for example, its name.
  • spec: Describes the desired state of the Service.
    • selector: Specifies which pods the Service should route traffic to. It selects pods with the label app: iris.
    • ports: Defines the ports that the Service should listen on and forward traffic to.
    • type: Specifies the type of Service. Here, it is ClusterIP, which exposes the Service on an internal IP within the Kubernetes cluster.

Run the next command to apply the service configuration:

kubectl apply -f service.yaml
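To verify that the Service has picked up the pod created by the Deployment, you can check its endpoints; the output should list the pod's IP on port 52773:

kubectl get endpoints iris-service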

Run the following command to connect to the IRIS terminal:

kubectl exec -it iris-deployment-6475c7c6b9-7dzjv -- iris session iris
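Note that the pod name above was generated by the Deployment, so yours will differ. One way to look it up by label and connect, assuming a Bash-like shell, is:

POD=$(kubectl get pods -l app=iris -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$POD" -- iris session iris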

Kudos! The terminal is running successfully!
 

To summarize, we have explored Kubernetes architecture and configuration:

  • We started with Kubernetes architecture;
  • Then we examined minikube and created an iris pod and service;
  • Finally, we used configuration files to run the iris pod and service.

Thanks!
